We thank the reviewers for their comments and suggestions, which will help us better present our work

Neural Information Processing Systems

We thank the reviewers for their comments and suggestions, which will help us better present our work. We will include the comparisons in the camera-ready version, if accepted. We agree that Charades is a good dataset for evaluation; we will perform experiments on Charades and present them in future work. More detailed analysis and discussion: we thank the reviewer for this suggestion and will include computation times in the final version.


Review #1: Thanks for your comments and suggestions

Neural Information Processing Systems

Therefore, our algorithms can be extended to more general semiring-based graphical models. We will fix all typos and presentation issues, and we will extend the discussion in the paper to clarify this point.
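The semiring claim above can be illustrated with a standard observation: the same marginalization recursion works over any commutative semiring, with sum-product and max-product as two instances. The following sketch is a generic illustration of that idea on a tiny two-variable chain, not the authors' specific algorithm; the function name and factor values are assumptions.

```python
# Generic semiring "message passing" on a two-variable chain:
# computes oplus_x oplus_y f(x) otimes f(x, y), where (oplus, otimes)
# is the semiring. Sum-product and max-product are both instances.
from functools import reduce
import operator

def chain_query(f_x, f_xy, oplus, otimes):
    """Eliminate both variables of a tiny chain under semiring (oplus, otimes)."""
    terms = [otimes(f_x[x], f_xy[x][y])
             for x in range(len(f_x))
             for y in range(len(f_xy[0]))]
    return reduce(oplus, terms)

# Toy factors over two binary variables.
f_x = [0.6, 0.4]
f_xy = [[0.7, 0.3],
        [0.2, 0.8]]

# Same recursion, two semirings:
partition = chain_query(f_x, f_xy, operator.add, operator.mul)  # sum-product
map_score = chain_query(f_x, f_xy, max, operator.mul)           # max-product
```

Swapping only the pair of operators switches the algorithm from computing the partition function to computing the MAP score, which is the sense in which such algorithms extend to general semiring-based graphical models.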



Rev#1: We thank the reviewer for the encouraging remarks

Neural Information Processing Systems

Rev#1: We thank the reviewer for the encouraging remarks. Why is the attention mechanism only based on the Y series..... latent factors X, F?: This is a good observation indeed. Details about what the loss function is, specifically to train the mean and residual forecasters in the local models?: These details are implicitly specified in Algorithm 1. The residual forecaster is trained using the same loss, but with respect to the true residual values in the future time range. We will add a text description in our revised version.
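The two-stage training described above (a mean forecaster fit to the true future values, then a residual forecaster fit with the same loss to the true residuals) can be sketched as follows. The linear models, the feature split, and the MSE loss are illustrative assumptions, not the paper's actual Algorithm 1.

```python
# Hedged sketch: mean forecaster, then residual forecaster trained with the
# same loss against the true residuals (illustrative, not Algorithm 1 itself).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                      # past-window features
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# 1) Mean forecaster: trained against the true future values (MSE via
#    least squares), here deliberately using only a subset of the features.
w_mean, *_ = np.linalg.lstsq(X[:, :1], y, rcond=None)
mean_pred = X[:, :1] @ w_mean

# 2) Residual forecaster: the SAME loss, but fit to the true residual
#    values in the future time range, r = y - mean_pred.
residuals = y - mean_pred
w_res, *_ = np.linalg.lstsq(X, residuals, rcond=None)
res_pred = X @ w_res

final_pred = mean_pred + res_pred                  # combined local forecast
```

Because the residual forecaster minimizes the same loss on the leftover error, the combined forecast can only match or improve on the mean forecaster alone under this training objective.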








Is Model Editing Built on Sand? Revealing Its Illusory Success and Fragile Foundation

Liu, Wei, Xu, Haomei, Liu, Bingqing, Deng, Zhiying, Wang, Haozhao, Wang, Jun, Li, Ruixuan, Teh, Yee Whye, Lee, Wee Sun

arXiv.org Artificial Intelligence

Large language models (LLMs) inevitably encode outdated or incorrect knowledge, and updating, deleting, or forgetting such knowledge is important for alignment, safety, and related concerns. To address this, model editing has emerged as a promising paradigm: precisely editing a small subset of parameters so that a specific fact is updated while other knowledge is preserved. Despite the great success reported in previous papers, we find that the apparent reliability of editing rests on a fragile foundation and that the current literature is largely driven by illusory success. The fundamental goal of steering the model's output toward a target with minimal modification encourages exploiting hidden shortcuts rather than utilizing real semantics. This problem directly challenges the feasibility of the current model editing literature at its very foundation, as shortcuts are inherently at odds with robust knowledge integration. This issue has long been obscured by evaluation frameworks that lack negative examples. To uncover it, we systematically develop a suite of new evaluation methods. Strikingly, we find that state-of-the-art approaches collapse even under the simplest negation queries. Our empirical evidence shows that editing is likely based on shortcuts rather than full semantics, calling for an urgent reconsideration of the very basis of model editing before further advancements can be meaningfully pursued.
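The abstract's core diagnosis is that evaluations without negative examples cannot distinguish a semantically robust edit from a shortcut that fires whenever the edited subject appears. The toy sketch below simulates that failure mode with a hypothetical stand-in model; the model, the fact triple, and the prompts are all illustrative assumptions, not the paper's actual benchmark or any real edited LLM.

```python
# Toy illustration of why negation queries expose shortcut-based edits.
# `shortcut_model` simulates an edited model that keys only on the subject
# string and ignores negation in the prompt (a hidden shortcut).

def shortcut_model(prompt: str) -> str:
    """Stand-in 'edited' model: returns the edited target whenever the
    edited subject appears, regardless of the prompt's semantics."""
    if "Eiffel Tower" in prompt:
        return "Rome"          # the (counterfactual) edited target
    return "unknown"

new_target = "Rome"
positive = "The Eiffel Tower is located in"
negated = "The Eiffel Tower is NOT located in"

# Standard evaluation (positive query only): the edit looks successful.
passes_standard_eval = shortcut_model(positive) == new_target

# Negation probe: a semantically robust edit should NOT emit the target
# here, but the shortcut model does, revealing the illusory success.
fails_negation_probe = shortcut_model(negated) == new_target
```

In this toy setup both flags come out true: the model passes the positive-only evaluation while also producing the target under negation, which is exactly the kind of collapse that an evaluation suite with negative examples is designed to detect.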